
    Signal reconstruction by means of Embedding, Clustering and AutoEncoder Ensembles

    We study the denoising and reconstruction of corrupted signals by means of AutoEncoder ensembles. To guarantee the diversity of the experts in the ensemble, we apply, prior to learning, a dimensionality reduction pass (to map the examples into a suitable Euclidean space) and a partitional clustering pass: each cluster is then used to train a distinct AutoEncoder. We evaluate the approach on an audio file benchmark in which the original signals are artificially corrupted by Doppler effect and reverb. The results support the effectiveness of the approach compared to a baseline based on a single AutoEncoder: the processing pipeline using Locally Linear Embedding, k-means, and then k Convolutional Denoising AutoEncoders reduces the reconstruction error by 35% w.r.t. the baseline approach.
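    A minimal sketch of this pipeline on synthetic data: LocallyLinearEmbedding and KMeans from scikit-learn play the roles of the embedding and clustering passes, while a plain MLPRegressor stands in for the paper's Convolutional Denoising AutoEncoders; the data, layer sizes and noise level are illustrative assumptions.

        # Embedding -> clustering -> one denoising expert per cluster (sketch).
        import numpy as np
        from sklearn.manifold import LocallyLinearEmbedding
        from sklearn.cluster import KMeans
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        clean = rng.normal(size=(300, 64))                    # clean signal windows (synthetic)
        noisy = clean + 0.3 * rng.normal(size=clean.shape)    # artificially corrupted versions

        k = 4
        # 1) dimensionality reduction into a low-dimensional Euclidean space
        embedded = LocallyLinearEmbedding(n_components=8, n_neighbors=12).fit_transform(noisy)
        # 2) partitional clustering of the embedded examples
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embedded)
        # 3) one denoiser per cluster, trained to map noisy windows to clean ones
        experts = {}
        for c in range(k):
            members = labels == c
            experts[c] = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                      random_state=0).fit(noisy[members], clean[members])

        # reconstruction error: each example is denoised by the expert of its own cluster
        mse = np.mean([np.mean((experts[c].predict(noisy[labels == c]) - clean[labels == c]) ** 2)
                       for c in range(k)])
        print("ensemble reconstruction MSE:", mse)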

    From samples to populations in retinex models

    Some spatial color algorithms, such as Brownian Milano retinex (MI-retinex) and random spray retinex (RSR), are based on sampling. In Brownian MI-retinex, memoryless random walks (MRWs) explore the neighborhood of a pixel and are then used to compute its output. Given the relative redundancy and inefficiency of MRW exploration, the RSR algorithm replaced the walks with samples of points (the sprays). Recent works point out that mapping the sampling formulation onto the probabilistic formulation of the corresponding sampling process can offer useful insights into the models, while also yielding intrinsically noise-free outputs. This paper continues the development of this concept and shows that the population-based versions of RSR and Brownian MI-retinex can be used to obtain analytical expressions for the outputs of some test images. The comparison of the two analytic expressions, from RSR and from Brownian MI-retinex, demonstrates not only that the two outputs are in general different, but also that they depend in qualitatively different ways upon the features of the image.
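    As an illustration of the sample-to-population step, assuming the standard spray-based formulation of RSR (with I the channel intensity and S_1, ..., S_N the random sprays centered at pixel x), the average over N finite sprays is replaced by an expectation over the spray-generating distribution; this is only a sketch of the idea, not the paper's full derivation:

        \[
          L_{\mathrm{RSR}}(x) \;=\; \frac{1}{N}\sum_{j=1}^{N}\frac{I(x)}{\max_{y \in S_j} I(y)}
          \quad\longrightarrow\quad
          L(x) \;=\; I(x)\,\mathbb{E}_{S}\!\left[\frac{1}{\max_{y \in S} I(y)}\right]
        \]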

    Improving probabilistic flooding using topological indexes

    Unstructured networks are characterized by constrained resources and require protocols that use bandwidth and battery power efficiently. Probabilistic flooding allows nodes to rebroadcast RREQ packets with some probability p, thus reducing the overhead. The key issue in this algorithm is determining p. The techniques proposed so far either use a fixed p determined by a priori considerations, a p that varies from one node to another (set, for instance, based on node degree or on the distance between source and destination), or a dynamic p based on the number of redundant messages received by the nodes. In order to make the computation of the forwarding probability p work well regardless of topology changes, we propose to set p based on the node's role within the message dissemination process. Specifically, we identify this role through the nodes' clustering coefficients (the lower the coefficient, the higher the forwarding probability). The performance of the algorithm is evaluated in terms of routing overhead, packet delivery ratio, and end-to-end delay. The algorithm pays a price in terms of the computation time needed to obtain the clustering coefficients; however, it reduces unnecessary and redundant control messages and achieves a significant improvement in packet delivery ratio in both dense and sparse networks. We compare by simulation the performance of this algorithm with that of the most representative competing algorithms.
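    A minimal sketch of the proposed rule, assuming a linear mapping from the local clustering coefficient to the forwarding probability; the toy graph, the bounds p_min and p_max and the mapping itself are illustrative choices, not values from the paper.

        # Lower clustering coefficient -> higher rebroadcast probability (sketch).
        import networkx as nx

        def forwarding_probabilities(graph, p_min=0.4, p_max=0.9):
            cc = nx.clustering(graph)                  # local clustering coefficient per node
            # coefficient 1 maps to p_min, coefficient 0 maps to p_max
            return {node: p_max - (p_max - p_min) * c for node, c in cc.items()}

        G = nx.random_geometric_graph(50, radius=0.25, seed=1)   # toy ad hoc topology
        p = forwarding_probabilities(G)
        # a node receiving a RREQ would rebroadcast it with probability p[node]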

    A Deep Learning Approach to Radio Signal Denoising

    This paper proposes a Deep Learning approach to radio signal denoising. The approach is data-driven, so it can denoise signals corresponding to distinct protocols without requiring explicit expert knowledge, thereby granting higher flexibility. The core component of the Artificial Neural Network architecture used in this work is a Convolutional Denoising AutoEncoder. We report on the performance of the system in spectrogram-based denoising of the protocol preamble across protocols of the IEEE 802.11 family, studied using simulation data. This approach can be used within a machine learning pipeline: the denoised data can be fed to a protocol classifier. A further prospective advantage of using AutoEncoders in such a pipeline is that they can be co-trained with the downstream classifier (protocol detector) to optimize its accuracy.
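    A minimal Keras sketch of a Convolutional Denoising AutoEncoder for spectrogram patches, in the spirit of the architecture described above; the input size, layer widths and loss are illustrative assumptions, not the paper's configuration.

        # Convolutional Denoising AutoEncoder on spectrogram patches (sketch).
        from tensorflow.keras import layers, models

        def build_cdae(freq_bins=64, time_frames=64):
            inp = layers.Input(shape=(freq_bins, time_frames, 1))        # noisy spectrogram
            x = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
            x = layers.MaxPooling2D(2, padding="same")(x)
            x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
            encoded = layers.MaxPooling2D(2, padding="same")(x)
            x = layers.Conv2D(8, 3, activation="relu", padding="same")(encoded)
            x = layers.UpSampling2D(2)(x)
            x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
            x = layers.UpSampling2D(2)(x)
            out = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)  # denoised estimate
            model = models.Model(inp, out)
            model.compile(optimizer="adam", loss="mse")   # trained on (noisy, clean) pairs
            return model

        cdae = build_cdae()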

    Using autoencoders for radio signal denoising

    We investigated the use of a Deep Learning approach to radio signal denoising. This data-driven approach does not require explicit expert knowledge to set the parameters of the denoising procedure and grants great flexibility across many channel conditions. The core component used in this work is a Convolutional Denoising AutoEncoder (CDAE), known to be very effective in image processing. The key of our approach is to transform the radio signal into a representation suitable for the CDAE: we convert the time-domain signal into a 2D signal using the Short Time Fourier Transform. We report on the performance of the approach in preamble denoising across protocols of the IEEE 802.11 family, studied using simulation data. This approach could be used within a machine learning pipeline: the denoised data can be fed to a protocol classifier. A prospective advantage of using AutoEncoders in that pipeline is that they can be co-trained with the downstream classifier to optimize the classification accuracy.
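    A minimal sketch of the time-to-frequency transformation step, assuming a complex baseband signal and SciPy's STFT; the sample rate, window length and toy signal are illustrative, not the simulation settings of the paper.

        # Time-domain signal -> 2D magnitude spectrogram for the CDAE (sketch).
        import numpy as np
        from scipy.signal import stft

        fs = 20e6                                        # illustrative sample rate
        t = np.arange(4096) / fs
        signal = np.exp(2j * np.pi * 1e6 * t)            # toy complex baseband tone
        noise = 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))
        noisy = signal + noise

        f, frames, Z = stft(noisy, fs=fs, nperseg=64, return_onesided=False)
        spectrogram = np.abs(Z)                          # 2D input image for the autoencoder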

    A cryptographic cloud-based approach for the mitigation of the airline cargo cancellation problem

    In order to maintain good long-term relationships with their main customers, airline cargo companies do not impose any fee for last-minute cancellations of shipments. As a result, customers can book the same shipment with several cargo companies. Cargo companies try to balance cancellations with a corresponding volume of overbooking. However, the considerable uncertainty in the number of cancellations does not allow them to fine-tune the optimal overbooking level, causing losses. In this work, we show how the deployment of cryptographic techniques enabling computation on the private data of customers and companies can improve the overall service chain, allowing better agreements to be struck and enforced. We propose a query system based on proxy re-encryption and show how the relevant information can be extracted while still preserving the privacy of customers' data. Furthermore, we provide a Game Theoretic model of the use case scenario and show that it allows a more accurate estimate of the cancellation rates. This supports the reduction of the uncertainty and allows better tuning of the overbooking level.
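    As an illustration of the last point only (not of the paper's cryptographic protocol or its game-theoretic model), a sketch of how a more reliable cancellation-rate estimate can be turned into an overbooking level, under an assumed binomial show-up model and a tolerated overflow risk.

        # Largest number of accepted bookings whose overflow risk stays tolerable (sketch).
        from scipy.stats import binom

        def overbooking_level(capacity, cancel_rate, max_overflow_risk=0.05):
            accepted = capacity
            while True:
                # shipments that actually show up ~ Binomial(accepted, 1 - cancel_rate)
                p_overflow = 1.0 - binom.cdf(capacity, accepted, 1.0 - cancel_rate)
                if p_overflow > max_overflow_risk:
                    return accepted - 1
                accepted += 1

        print(overbooking_level(capacity=100, cancel_rate=0.2))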

    QBRIX : a quantile-based approach to retinex

    In this paper, we introduce a novel probabilistic version of retinex. It is based on a probabilistic formalization of the random spray retinex sampling and contributes to the investigation of the spatial properties of the model. The various available versions of the retinex algorithm are characterized by different procedures for exploring the image content (so as to obtain, for each pixel, a reference white value), which is then used to rescale the pixel lightness. Here we propose an alternative procedure, which computes the reference white value from the percentile values of the pixel population. We formalize two versions of the algorithm, one with global and one with local behavior, characterized by different computational costs.
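    A minimal sketch of the global flavour of the quantile idea, assuming per-channel processing and an arbitrarily chosen high percentile as reference white; the actual QBRIX formulation and its local variant are more elaborate.

        # Global quantile-based rescaling: reference white = high percentile per channel (sketch).
        import numpy as np

        def global_quantile_rescale(image, q=99.0):
            out = np.empty_like(image, dtype=float)
            for ch in range(image.shape[2]):
                white = np.percentile(image[..., ch], q)        # reference white for this channel
                out[..., ch] = np.clip(image[..., ch] / max(white, 1e-6), 0.0, 1.0)
            return out

        rgb = np.random.rand(64, 64, 3)    # stand-in for an input image with values in [0, 1]
        enhanced = global_quantile_rescale(rgb)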

    K-Means Clustering in Dual Space for Unsupervised Feature Partitioning in Multi-view Learning

    In contrast to single-view learning, multi-view learning simultaneously trains distinct algorithms on disjoint subsets of features (the views) and jointly optimizes them, so that they come to a consensus. Multi-view learning is typically used when the data are described by a large number of features; it aims at exploiting the different statistical properties of distinct views. A task to be performed before multi-view learning, in the case where the features have no natural grouping, is multi-view generation (MVG): it consists in partitioning the feature set into subsets (views) characterized by some desired properties. Given a dataset in the form of a table with a large number of columns, the desired solution of the MVG problem is a partition of the columns that optimizes an objective function encoding typical requirements. If the class labels are available, one wants to minimize the inter-view redundancy in target prediction and maximize consistency. If the class labels are not available, one simply wants to minimize inter-view redundancy (minimize the information each view has about the others). In this work, we approach the MVG problem in the latter, unsupervised, setting. Our approach is based on the transposition of the data table: the original instance rows are mapped into columns (the 'pseudo-features'), while the original feature columns become rows (the 'pseudo-instances'). The latter can then be partitioned by any suitable standard instance-partitioning algorithm: the resulting groups can be considered as groups of the original features, i.e. views, solving the MVG problem. We demonstrate the approach using k-means and the standard benchmark MNIST dataset of handwritten digits.
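    A minimal sketch of the transposition trick, using scikit-learn's k-means and the small digits dataset as a stand-in for MNIST; the number of views is an arbitrary illustrative choice.

        # Features become pseudo-instances; clustering them yields the views (sketch).
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.datasets import load_digits

        X, _ = load_digits(return_X_y=True)       # rows: instances, columns: features
        pseudo_instances = X.T                    # rows: original features

        n_views = 3
        labels = KMeans(n_clusters=n_views, n_init=10, random_state=0).fit_predict(pseudo_instances)
        views = [np.where(labels == v)[0] for v in range(n_views)]   # feature indices per view
        # each views[v] can now be handed to a distinct learner in a multi-view pipeline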

    Review and Comparison of Random Spray Retinex and of its variants STRESS and QBRIX

    In this paper, we review and compare three spatial color algorithms of the Milano Retinex family: Random Spray Retinex (RSR) and its subsequent variants STRESS and QBRIX. These algorithms process the colors of any input image in line with the principles of the Retinex theory, introduced about 50 years ago by Land and McCann to explain how humans see colors. According to this theory, RSR, STRESS and QBRIX independently rescale the color intensity of each pixel by a quantity, named the local reference white, which depends on the spatial arrangement of the colors in the pixel's surround. The output is a new, color-enhanced image that generally has higher brightness and more visible details than the input one. RSR, STRESS and QBRIX adopt different models of spatial arrangement and implement different equations for the computation of the local reference white, so they produce different enhanced images. We propose a comparative analysis of their performance based on numerical measures of image brightness, detail and dynamic range. In order to enable result repeatability and further comparisons, we use a set of publicly available images.
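    Simple stand-ins for the three kinds of measures mentioned above (mean intensity for brightness, average gradient magnitude for detail, a percentile spread for dynamic range); the measures actually used in the paper may differ.

        # Illustrative brightness / detail / dynamic-range measures on a grayscale image (sketch).
        import numpy as np

        def brightness(gray):
            return float(gray.mean())

        def detail(gray):
            gy, gx = np.gradient(gray.astype(float))
            return float(np.mean(np.hypot(gx, gy)))       # average gradient magnitude

        def dynamic_range(gray, lo=1.0, hi=99.0):
            p_lo, p_hi = np.percentile(gray, [lo, hi])
            return float(p_hi - p_lo)

        img = np.random.rand(128, 128)                    # stand-in for a test image
        print(brightness(img), detail(img), dynamic_range(img))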